Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one remaining challenge is modeling long texts in document-level sentiment classification under a recurrent architecture because of the deficiency of the memory unit. To address this problem, we present a Cached Long Short-Term Memory neural network (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism, which divides memory into several groups with different forgetting rates and thus enables the network to better retain sentiment information within a recurrent unit. The proposed CLSTM outperforms state-of-the-art models on three publicly available document-level sentiment analysis datasets.
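To make the cache mechanism concrete, below is a minimal PyTorch sketch of a recurrent cell whose memory is split into K groups with distinct forgetting rates. All names (CachedLSTMCell, num_groups) are illustrative, and the specific design choices here — squashing each group's forgetting rate into its own sub-interval of [0, 1] and coupling the input gate to the forgetting rate — are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn


class CachedLSTMCell(nn.Module):
    """Illustrative recurrent cell whose memory is split into K groups.

    Group k's forgetting rate is squashed into [(k-1)/K, k/K], so the
    first group forgets slowly (a long-term "cache") while the last
    group forgets quickly (short-term memory).
    """

    def __init__(self, input_size: int, hidden_size: int, num_groups: int = 3):
        super().__init__()
        assert hidden_size % num_groups == 0
        self.num_groups = num_groups
        group_size = hidden_size // num_groups
        # One linear map produces the forgetting rates, output gate,
        # and candidate memory from [x_t; h_{t-1}].
        self.gates = nn.Linear(input_size + hidden_size, 3 * hidden_size)
        # Per-unit lower bound of each group's forgetting-rate interval.
        lo = torch.arange(num_groups, dtype=torch.float32) / num_groups
        self.register_buffer("lo", lo.repeat_interleave(group_size))

    def forward(self, x, state):
        h, c = state
        z = self.gates(torch.cat([x, h], dim=-1))
        r_raw, o_raw, c_hat_raw = z.chunk(3, dim=-1)
        # Squash each group's rate into its own interval of width 1/K.
        r = self.lo + torch.sigmoid(r_raw) / self.num_groups
        o = torch.sigmoid(o_raw)
        c_hat = torch.tanh(c_hat_raw)
        # Coupled update (an assumption): a high forgetting rate both
        # erases old memory and admits more of the new candidate.
        c = (1.0 - r) * c + r * c_hat
        h = o * torch.tanh(c)
        return h, c


# Usage: run the cell over a sequence of word embeddings.
cell = CachedLSTMCell(input_size=100, hidden_size=300, num_groups=3)
h = c = torch.zeros(8, 300)                 # batch of 8 documents
for x_t in torch.randn(50, 8, 100):         # 50 time steps
    h, c = cell(x_t, (h, c))
```

Because the slow group's rate never exceeds 1/K, its memory decays slowly across many steps, which is one way to preserve document-level sentiment information over long texts.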